# 32B inference optimization
AM Thinking V1
License: Apache-2.0
A 32-billion-parameter dense language model focused on enhanced reasoning, built on Qwen 2.5-32B-Base. Its performance on reasoning benchmarks is comparable to that of much larger MoE models.
Tags: Large Language Model, Transformers
Developer: a-m-team
1,377 · 153
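As a rough illustration of why a 32B dense model is a natural target for inference optimization, the sketch below estimates the GPU memory needed just to hold the weights at common precisions. The 20% overhead factor for activations and KV cache is a rule-of-thumb assumption, not a figure from this page:

```python
def inference_memory_gb(n_params: float, bytes_per_param: float,
                        overhead: float = 1.2) -> float:
    """Approximate GPU memory (GB) to serve a model: weight bytes
    plus an assumed ~20% overhead for activations and KV cache."""
    return n_params * bytes_per_param * overhead / 1e9

# A 32B dense model at common inference precisions.
for name, width in [("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{name}: ~{inference_memory_gb(32e9, width):.0f} GB")
```

Under these assumptions, half-precision weights alone push past a single 80 GB accelerator, which is why 8-bit and 4-bit quantization are the usual first steps for serving a model of this size.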
© 2025 AIbase